Episodic Future Thinking Mechanism for Multi-agent Reinforcement Learning

Neural Information Processing Systems

Understanding cognitive processes in multi-agent interactions is a primary goal in cognitive science. It can guide the direction of artificial intelligence (AI) research toward social decision-making in multi-agent systems, which includes uncertainty from character heterogeneity. In this paper, we introduce an episodic future thinking (EFT) mechanism for a reinforcement learning (RL) agent, inspired by the cognitive processes observed in animals. To enable future thinking functionality, we first develop a multi-character policy that captures diverse characters with an ensemble of heterogeneous policies. The character of an agent is defined as a different weight combination on reward components, representing distinct behavioral preferences.
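
The abstract's notion of a character as a weight combination over reward components can be illustrated with a minimal sketch; the function, component names, and numbers below are hypothetical illustrations, not the paper's actual formulation:

```python
import numpy as np

# Sketch: a "character" as a weight vector over reward components. An agent's
# scalar reward is the dot product of its character weights with the vector of
# reward components observed at a timestep.

def scalar_reward(character_weights, reward_components):
    """Combine reward components (e.g. speed, safety, comfort) into one scalar."""
    w = np.asarray(character_weights, dtype=float)
    r = np.asarray(reward_components, dtype=float)
    assert w.shape == r.shape
    return float(w @ r)

# Two agents with distinct behavioral preferences over the same components.
aggressive = [0.8, 0.1, 0.1]   # favors the speed component
cautious   = [0.1, 0.8, 0.1]   # favors the safety component
components = [1.0, 0.5, 0.2]   # observed (speed, safety, comfort) signals

print(scalar_reward(aggressive, components))  # 0.87
print(scalar_reward(cautious, components))    # 0.52
```

Identical observations thus yield different scalar rewards per character, which is what makes the ensemble of policies heterogeneous.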


ProgD: Progressive Multi-scale Decoding with Dynamic Graphs for Joint Multi-agent Motion Forecasting

Gao, Xing, Huang, Zherui, Lin, Weiyao, Sun, Xiao

arXiv.org Artificial Intelligence

Accurate motion prediction of surrounding agents is crucial for the safe planning of autonomous vehicles. Recent advancements have extended prediction techniques from individual agents to joint predictions of multiple interacting agents, with various strategies to address complex interactions within the future motions of agents. However, these methods overlook the evolving nature of these interactions. To address this limitation, we propose a novel progressive multi-scale decoding strategy, termed ProgD, with the help of dynamic heterogeneous graph-based scenario modeling. In particular, to explicitly and comprehensively capture the evolving social interactions in future scenarios, given their inherent uncertainty, we design a progressive modeling of scenarios with dynamic heterogeneous graphs. With the unfolding of such dynamic heterogeneous graphs, a factorized architecture is designed to process the spatio-temporal dependencies within future scenarios and progressively eliminate uncertainty in the future motions of multiple agents. Furthermore, a multi-scale decoding procedure is incorporated to improve future scenario modeling and the consistency of agents' predicted future motion.

Introduction

Motion prediction is important for self-driving systems to ensure safe and efficient navigation. Of particular interest is joint multi-agent motion prediction, which involves concurrently forecasting the future trajectories of all agents within a scene. This task has gained increasing attention recently due to its complexity compared to marginal motion prediction, as it requires maintaining consistency and coherence in the future motions of interactive agents, reflecting the intricate dynamics of real-world traffic. Without such consistency, the prediction module could produce conflicting trajectories, such as collisions between the predicted motions of agents, which would undermine the reliability of the system and lead to unsafe or infeasible motion plans.
The challenge is further compounded by several intrinsic factors: (1) Dynamic and complex future social interactions.
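
The idea of unfolding a dynamic interaction graph over future timesteps can be sketched roughly as follows; the distance-threshold rule and all names here are illustrative assumptions, not the paper's actual heterogeneous-graph construction:

```python
import numpy as np

# Sketch: at each future step, edges connect agent pairs whose predicted
# positions fall within an interaction radius, so the graph topology evolves
# as the predicted motion unfolds.

def unfold_graphs(predicted_positions, radius=5.0):
    """predicted_positions: array of shape (T, N, 2) -> list of T adjacency matrices."""
    T, N, _ = predicted_positions.shape
    graphs = []
    for t in range(T):
        # Pairwise distances between all agents at step t.
        d = np.linalg.norm(
            predicted_positions[t, :, None, :] - predicted_positions[t, None, :, :],
            axis=-1,
        )
        adj = (d < radius) & ~np.eye(N, dtype=bool)  # no self-loops
        graphs.append(adj)
    return graphs

pos = np.array([[[0, 0], [3, 0], [20, 0]],     # t=0: agents 0 and 1 are close
                [[0, 0], [10, 0], [12, 0]]],   # t=1: agents 1 and 2 are close
               dtype=float)
gs = unfold_graphs(pos)
print(gs[0][0, 1], gs[1][1, 2])  # True True
```

The point of the sketch is that the edge set differs between the two steps, which is the "evolving interactions" property that a static scene graph cannot represent.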


FutureVision: A methodology for the investigation of future cognition

Torrent, Tiago Timponi, Turner, Mark, Hinrichs, Nicolás, Belcavello, Frederico, Lourenço, Igor, Almeida, Arthur Lorenzi, Viridiano, Marcelo, Matos, Ely Edison

arXiv.org Artificial Intelligence

This paper presents a methodology combining multimodal semantic analysis with an eye-tracking experimental protocol to investigate the cognitive effort involved in understanding the communication of future scenarios. To demonstrate the methodology, we conduct a pilot study examining how visual fixation patterns vary during the evaluation of valence and counterfactuality in fictional ad pieces describing futuristic scenarios, using a portable eye tracker. Participants' eye movements are recorded while evaluating the stimuli and describing them to a conversation partner. Gaze patterns are analyzed alongside semantic representations of the stimuli and participants' descriptions, constructed from a frame semantic annotation of both linguistic and visual modalities. Preliminary results show that far-future and pessimistic scenarios are associated with longer fixations and more erratic saccades, supporting the hypothesis that fractures in the base spaces underlying the interpretation of future scenarios increase cognitive load for comprehenders.


ProSpec RL: Plan Ahead, then Execute

Liu, Liangliang, Guan, Yi, Wang, BoRan, Shen, Rujia, Lin, Yi, Kong, Chaoran, Yan, Lian, Jiang, Jingchi

arXiv.org Artificial Intelligence

Imagining potential outcomes of actions before execution helps agents make more informed decisions, a prospective thinking ability fundamental to human cognition. However, mainstream model-free Reinforcement Learning (RL) methods lack the ability to proactively envision future scenarios, plan, and guide strategies. These methods typically rely on trial and error to adjust policy functions, aiming to maximize cumulative rewards or long-term value, even if such high-reward decisions place the environment in extremely dangerous states. To address this, we propose the Prospective (ProSpec) RL method, which makes higher-value, lower-risk optimal decisions by imagining future n-stream trajectories. Specifically, ProSpec employs a dynamic model to predict future states (termed "imagined states") based on the current state and a series of sampled actions. Furthermore, we integrate the concept of Model Predictive Control and introduce a cycle consistency constraint that allows the agent to evaluate and select the optimal actions from these trajectories. Moreover, ProSpec employs cycle consistency to mitigate two fundamental issues in RL: augmenting state reversibility to avoid irreversible events (low risk) and augmenting actions to generate numerous virtual trajectories, thereby improving data efficiency. We validated the effectiveness of our method on the DMControl benchmarks, where our approach achieved significant performance improvements. Code will be open-sourced upon acceptance.
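
The imagined-trajectory selection the abstract describes can be sketched in miniature: sample several action sequences, unroll a dynamics model to obtain "imagined states," score each trajectory, and execute the first action of the best one. The toy dynamics, reward, and parameter names below are placeholders, not ProSpec's actual learned components:

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(state, action):
    """Stand-in for a learned model predicting the next state."""
    return state + action  # toy: 1-D position shifted by the action

def reward(state):
    return -abs(state - 10.0)  # closer to the goal state 10.0 is better

def plan_first_action(state, n_traj=64, horizon=5):
    """Sample n_traj imagined trajectories; return the best first action."""
    best_score, best_first = -np.inf, None
    for _ in range(n_traj):
        actions = rng.uniform(-1.0, 1.0, size=horizon)
        s, score = state, 0.0
        for a in actions:          # unroll the imagined trajectory
            s = dynamics(s, a)
            score += reward(s)
        if score > best_score:
            best_score, best_first = score, actions[0]
    return best_first

a0 = plan_first_action(state=0.0)
print(a0)  # first action of the best-scoring imagined trajectory
```

A full implementation would add the Model Predictive Control loop (replan every step) and the paper's cycle consistency constraint; this sketch only shows the "imagine, then select" skeleton.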


The Impact of AI on Perceived Job Decency and Meaningfulness: A Case Study

Ghosh, Kuntal, Sadeghian, Shadan

arXiv.org Artificial Intelligence

The proliferation of Artificial Intelligence (AI) in workplaces stands to change the way humans work, with job satisfaction intrinsically linked to work life. Existing research on human-AI collaboration tends to prioritize performance over the experiential aspects of work. In contrast, this paper explores the impact of AI on job decency and meaningfulness in workplaces. Through interviews in the Information Technology (IT) domain, we not only examined the current work environment, but also explored the perceived evolution of the workplace ecosystem with the introduction of an AI. Findings from the preliminary exploratory study reveal that respondents tend to visualize a workplace where humans continue to play a dominant role, even with the introduction of advanced AIs. In this prospective scenario, AI is seen as serving as a complement rather than replacing the human workforce. Furthermore, respondents believe that the introduction of AI will maintain or potentially increase overall job satisfaction.


Deep Ensembles to Improve Uncertainty Quantification of Statistical Downscaling Models under Climate Change Conditions

González-Abad, Jose, Baño-Medina, Jorge

arXiv.org Artificial Intelligence

Recently, deep learning has emerged as a promising tool for statistical downscaling, the set of methods for generating high-resolution climate fields from coarse low-resolution variables. Nevertheless, their ability to generalize to climate change conditions remains questionable, mainly due to the stationarity assumption. We propose deep ensembles as a simple method to improve the uncertainty quantification of statistical downscaling models. By better capturing uncertainty, statistical downscaling models allow for superior planning against extreme weather events, a source of various negative social and economic impacts. Since no observational future data exists, we rely on a pseudo reality experiment to assess the suitability of deep ensembles for quantifying the uncertainty of climate change projections. Deep ensembles allow for a better risk assessment, highly demanded by sectoral applications to tackle climate change.
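
The core mechanism, reading predictive uncertainty from the disagreement of ensemble members, can be sketched as follows; the bootstrap linear fits stand in for deep downscaling networks, and every name and number here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "predictor-predictand" data standing in for coarse/high-resolution fields.
x = rng.uniform(0, 1, size=50)
y = 2.0 * x + rng.normal(0, 0.1, size=50)

def fit_member(x, y):
    """Fit one ensemble member on a bootstrap resample (stand-in for training a network)."""
    idx = rng.integers(0, len(x), size=len(x))
    xb, yb = x[idx], y[idx]
    A = np.stack([xb, np.ones_like(xb)], axis=1)
    coef, *_ = np.linalg.lstsq(A, yb, rcond=None)
    return coef  # (slope, intercept)

ensemble = [fit_member(x, y) for _ in range(10)]

def predict(x_new):
    """Ensemble mean as the prediction, member spread as the uncertainty."""
    preds = np.array([a * x_new + b for a, b in ensemble])
    return preds.mean(), preds.std()

mu, sigma = predict(0.5)
print(mu, sigma)  # mean near 2.0 * 0.5 = 1.0; small spread in-distribution
```

For out-of-distribution inputs, such as the future climate conditions the abstract targets, the member spread typically grows, which is what makes the ensemble useful for risk assessment.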


X-Risk Analysis for AI Research

Hendrycks, Dan, Mazeika, Mantas

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind the potential benefits of AI, there is some concern that building ever more intelligent and powerful AI systems could eventually result in systems that are more powerful than us; some say this is like playing with fire and speculate that this could create existential risks (x-risks). To add precision and ground these discussions, we provide a guide for how to analyze AI x-risk, which consists of three parts: First, we review how systems can be made safer today, drawing on time-tested concepts from hazard analysis and systems safety that have been designed to steer large processes in safer directions. Next, we discuss strategies for having long-term impacts on the safety of future systems. Finally, we discuss a crucial concept in making AI systems safer by improving the balance between safety and general capabilities. We hope this document and the presented concepts and tools serve as a useful guide for understanding how to analyze AI x-risk.


Researching EU regulation around AI

AIHub

Developments in the field of artificial intelligence (AI) are moving quickly. The EU is working hard to establish rules around AI and to determine which systems are welcome and which are not. But how does the EU do this when the biggest players, the US and China, often have different ethical views? Political economist Daniel Mügge and his team will conduct research into how the EU conducts its 'AI diplomacy' and will sketch potential future scenarios. "Our research is essentially about regulation around AI", says political economist Daniel Mügge.


'Creating scenarios of what should be possible tomorrow': Givaudan develops 'advanced' futurescaping platform

#artificialintelligence

Givaudan has developed Consumer Foresight, a new tool that aims to help its customers co-create and innovate. This'futurescaping' platform will leverage big data, artificial intelligence and Givaudan's'deep expertise' of the food and beverage sector. It is a step beyond the trend forecasting models of today, Taste & Wellbeing President Louie D'Amico believes. "Most trend forecasting models largely focus on understanding the past and the present. Customer Foresight will be more predictive, with an ability to create potential future scenarios of what should be possible tomorrow to shape the future of food," he told FoodNavigator.